Generative AI Statement

The rapid development of Generative Artificial Intelligence presents our community with remarkable opportunities and distinctive challenges. Various forms of AI and machine learning have long been part of academic life at Wesleyan; tools such as Google Translate and automated writing assistants are already familiar to our community as challenges to specific forms of learning, such as language acquisition and writing. Nevertheless, the appearance of new Generative AI tools gives us occasion to revisit our thinking and our policies. Tools powered by GenAI have the potential to further our academic mission: they can inspire creativity and automate complex processes. But these same tools can also threaten that mission when they interfere with student learning or are used to violate our policies on academic integrity and/or data privacy.

Just as a graphing calculator or machine translator may be permissible in certain contexts and inappropriate in others, so too the appropriate use of AI varies across disciplines and subject matters. This means that students need clarity on the use of Generative AI in each of their courses.

Because appropriate use varies in these ways, we encourage faculty to share their expectations for the use of GenAI with their students in each course syllabus. If you plan to use TurnItIn or any other AI detection tool, please tell your students, and inform them of any explicit or quantitative metrics you will apply. If you encourage experimentation with AI, specify which uses of AI tools are appropriate and how they should be used. As you formulate these expectations, please bear the following in mind:

  1. The library maintains a list of resources on Generative AI, located here.
  2. GenAI tools and detection tools are constantly evolving. You may need to update your syllabus statement accordingly.
  3. Tools that rely on Large Language Models, such as ChatGPT, Microsoft Copilot, and the most recent versions of Grammarly, can be used not only to write and generate information but also to proofread, brainstorm, summarize documents, extract information, format and analyze data, create images, generate and debug code, and so forth. Help your students understand how these use cases intersect with your course’s learning goals, specifying where these tools may be useful and where they might interfere with learning and skill acquisition. In some cases, this may even lead you to reconsider the learning goals for your course.
  4. TurnItIn and other AI detection tools can generate false positives. Do not rely solely on the results of an AI detection tool when considering whether to speak with a student about a possible violation of the Honor Code.
  5. If your policy is maximally restrictive, consider whether to prohibit the use of AI tools even in the research, preliminary, or proofreading stages of work.
  6. If your policy encourages the use of AI, instruct students to acknowledge and cite their use of these tools appropriately. Our library website has helpful information on citation practices for these tools, located here. Bear in mind, too, that GenAI has its limits and may hallucinate.
  7. All syllabus policies should acknowledge that different courses will have different policies.
  8. Suspected violations of academic conduct should be assessed according to each course’s syllabus and the uses of GenAI described or prohibited therein. Talk to your student if you suspect that something may be amiss; alleged violations should be brought before the Honor Board.
  9. Uploading student work and/or Personally Identifiable Information to AI tools, including detection tools, may violate our University’s Generative AI Usage Policy.
  10. Questions about the use of generative AI in teaching can be directed to the appropriate Divisional Dean. Questions about the Honor Code can be directed to Kevin Butler, Director of Community Standards, at kbutler@wesleyan.edu.